It is widely believed that the higher the uncertainty of a word in a caption, the more inter-correlated contextual information is required to determine it. However, current image captioning methods usually generate all words of a sentence sequentially and treat them equally. In this paper, we propose an uncertainty-aware image captioning framework that, in parallel and iteratively, inserts discontinuous candidate words between existing words from easy to difficult until convergence. We hypothesize that high-uncertainty words in a sentence need more prior information to be decided correctly and should therefore be produced at a later stage. The resulting non-autoregressive hierarchy makes caption generation explainable and intuitive. Specifically, we utilize an image-conditioned bag-of-words model to measure word uncertainty and apply a dynamic programming algorithm to construct the training pairs. During inference, we devise an uncertainty-adaptive parallel beam search technique that yields an empirically logarithmic time complexity. Extensive experiments on the MS COCO benchmark reveal that our approach outperforms the strong baseline and related methods in both captioning quality and decoding speed.
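The easy-to-difficult insertion process described above can be sketched in miniature. This is a toy illustration, not the authors' implementation: `propose` stands in for the real model's candidate scorer, and the confidence threshold is a hypothetical stand-in for the paper's uncertainty measure.

```python
def insertion_decode(seed, propose, conf_threshold=0.5, max_rounds=10):
    """Toy sketch of easy-to-difficult parallel insertion decoding.

    `propose(tokens)` returns a list of (gap_index, word, confidence)
    candidate insertions; each round, every candidate whose confidence
    (i.e., low uncertainty) clears the threshold is inserted in parallel.
    Decoding stops once no remaining candidate is confident enough.
    """
    tokens = list(seed)
    for _ in range(max_rounds):
        confident = [c for c in propose(tokens) if c[2] >= conf_threshold]
        if not confident:
            break  # converged: remaining words are too uncertain
        # insert from right to left so earlier gap indices stay valid
        for gap, word, _ in sorted(confident, reverse=True):
            tokens.insert(gap, word)
    return tokens
```

Because every confident insertion in a round is committed at once, the number of rounds grows far slower than the sentence length, which is the intuition behind the empirically logarithmic decoding time.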
Quantum computing is a game-changing technology for global academia, research centers, and industries including computational science, mathematics, finance, pharmaceuticals, materials science, chemistry, and cryptography. Although it has seen a major boost in the last decade, we are still a long way from reaching the maturity of a full-fledged quantum computer. That said, we will remain in the Noisy Intermediate-Scale Quantum (NISQ) era for a long time, working with quantum computing systems of dozens to thousands of qubits. An outstanding challenge, then, is to devise an application that can reliably carry out a nontrivial task of interest on near-term quantum devices with non-negligible quantum noise. To address this challenge, several near-term quantum computing techniques, including variational quantum algorithms, error mitigation, quantum circuit compilation, and benchmarking protocols, have been proposed to characterize and mitigate errors, and to implement algorithms with a certain resistance to noise, so as to enhance the capabilities of near-term quantum devices and explore the boundaries of their ability to realize useful applications. Besides, the development of near-term quantum devices is inseparable from efficient classical simulation, which plays a vital role in quantum algorithm design and verification, error-tolerant verification, and other applications. This review provides a thorough introduction to these near-term quantum computing techniques, reports on their progress, and finally discusses their future prospects, which we hope will motivate researchers to undertake additional studies in this field.
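The classical simulation mentioned above can be illustrated at its smallest scale: a statevector simulator that applies a single-qubit gate to an n-qubit state. This is a minimal sketch for intuition only (the review covers far more sophisticated simulation methods); the function name and qubit-ordering convention are assumptions of this example.

```python
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit 2x2 unitary `gate` to the `target` qubit of
    an n-qubit statevector (qubit 0 = most significant bit of the index)."""
    state = state.reshape([2] * n_qubits)
    # contract the gate with the target qubit's axis, then restore axis order
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

# Hadamard gate: maps |0> to (|0> + |1>) / sqrt(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
```

Memory for the dense statevector grows as 2^n, which is precisely why efficient classical simulation techniques matter for verifying algorithms on NISQ-scale devices.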
Recently, vector-quantized autoregressive (VQ-AR) models have shown remarkable results in text-to-image synthesis by predicting discrete image tokens uniformly from top left to bottom right in the latent space. Although this simple generative process works surprisingly well, is it the best way to generate an image? For instance, human creation tends to proceed from the outline of an image to its fine details, whereas VQ-AR models do not consider the relative importance of each component. In this paper, we present a progressive denoising model for high-fidelity text-to-image generation. The proposed method creates new image tokens from coarse to fine based on the existing context in a parallel manner, and this procedure is applied recursively until the image token sequence is complete. The resulting coarse-to-fine hierarchy makes the image generation process intuitive and interpretable. Extensive experiments demonstrate that the progressive model achieves significantly better FID scores than the previous VQ-AR method across a wide variety of categories and aspects. Moreover, the text-to-image generation time of traditional AR models increases linearly with the output image resolution and is hence quite time-consuming even for normal-size images. In contrast, our approach achieves a better trade-off between generation quality and speed.
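The coarse-to-fine parallel procedure can be sketched with a generic confidence-based unmasking loop over a flattened token sequence. This is a simplified illustration of the general idea, not the paper's model: `predict` stands in for the real network, and the commit schedule is a hypothetical choice of this sketch.

```python
import numpy as np

MASK = -1  # sentinel for a not-yet-generated token position

def progressive_decode(n_tokens, predict, steps=4):
    """Toy sketch of parallel coarse-to-fine token generation.

    All positions start masked; each step, `predict(tokens)` returns
    (token_ids, confidences) for every position, and only the most
    confident still-masked positions are committed, so coarse structure
    is fixed early and finer details are filled in later.
    """
    tokens = np.full(n_tokens, MASK, dtype=int)
    for step in range(steps, 0, -1):
        ids, conf = predict(tokens)
        masked = np.flatnonzero(tokens == MASK)
        if masked.size == 0:
            break
        # commit roughly 1/step of the remaining masked positions
        n_commit = max(1, int(np.ceil(masked.size / step)))
        best = masked[np.argsort(-conf[masked])[:n_commit]]
        tokens[best] = ids[best]
    return tokens
```

Since each step fills a whole batch of positions at once, the number of network calls is a small constant rather than one per token, which is where the speed advantage over left-to-right AR decoding comes from.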
For years, the YOLO series has been the de facto industry-level standard for efficient object detection, and the YOLO community has prospered overwhelmingly, enriching its use on numerous hardware platforms and in abundant scenarios. In this technical report, we strive to push its limits to the next level, stepping forward with an unwavering mindset for industry application. Considering the diverse requirements for speed and accuracy in real environments, we extensively examine up-to-date object detection advancements from industry and academia. Specifically, we heavily assimilate ideas from recent network designs, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate our thoughts and practice to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of the YOLO authors, we name it YOLOv6, and we warmly welcome users and contributors to enhance it further. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOX-S, and PPYOLOE-S). Our quantized version of YOLOv6-S even brings a new state-of-the-art 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L also achieves better accuracy (i.e., 49.5%/52.3%) than other detectors with similar inference speed. We carefully conducted experiments to validate the effectiveness of each component. Our code is available at https://github.com/meituan/yolov6.
Panoptic Narrative Grounding (PNG) is an emerging task whose goal is to segment visual objects of both "things" and "stuff" categories described by a dense narrative caption of a still image. Previous two-stage approaches first extract segmentation region proposals with an off-the-shelf panoptic segmentation model, then conduct coarse region-phrase matching to ground the candidate regions for each noun phrase. However, the two-stage pipeline is usually limited by the low quality of first-stage proposals, the loss of spatial detail caused by region feature pooling, and the complicated strategies designed separately for things and stuff categories. To alleviate these drawbacks, we propose a one-stage end-to-end Pixel-Phrase Matching Network (PPMN), which directly matches each phrase to its corresponding pixels instead of region proposals and outputs panoptic segmentations by simple combination. Thus, our model can exploit sufficient and finer cross-modal semantic correspondence from the supervision of densely annotated pixel-phrase pairs rather than sparse region-phrase pairs. In addition, we propose a Language-Compatible Pixel Aggregation (LCPA) module to further enhance the discriminative ability of phrase features through multi-round refinement, which selects the most compatible pixels for each phrase to adaptively aggregate the corresponding visual context. Extensive experiments show that our method achieves new state-of-the-art performance on the PNG benchmark with a 4.0 absolute Average Recall gain.
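The core pixel-phrase matching idea can be sketched as a direct similarity between phrase embeddings and per-pixel features, thresholded into masks with no region proposals in between. This is a bare-bones sketch assuming dot-product scoring; the real PPMN's scoring and the LCPA refinement are considerably richer.

```python
import numpy as np

def pixel_phrase_masks(pixel_feats, phrase_embs, threshold=0.5):
    """Toy sketch of one-stage pixel-phrase matching: score every pixel
    against every noun-phrase embedding by dot product, and threshold the
    sigmoid response into one binary mask per phrase.

    pixel_feats: (H, W, C) per-pixel features
    phrase_embs: (P, C) one embedding per noun phrase
    returns:     (P, H, W) boolean masks
    """
    logits = np.einsum("hwc,pc->phw", pixel_feats, phrase_embs)
    probs = 1.0 / (1.0 + np.exp(-logits))  # per-pixel sigmoid
    return probs > threshold
```

Because supervision acts on every annotated pixel rather than on pooled region features, the gradient signal is both denser and spatially finer, which is the advantage the abstract attributes to the one-stage design.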
Recently, the convolution-augmented Transformer (Conformer) has shown promising results in automatic speech recognition (ASR), outperforming the previously published best Transformer Transducer. In this work, we argue that the output information of each block in the encoder and decoder is not completely inclusive; in other words, their outputs may be complementary. We study how to exploit the complementary information of each block in a parameter-efficient way, expecting that this may lead to stronger performance. We therefore propose the block-augmented Transformer for speech recognition, named Blockformer. We implement two block ensemble methods: the base Weighted Sum of the Blocks' Outputs (Base-WSBO), and a Squeeze-and-Excitation module applied to the Weighted Sum of the Blocks' Outputs (SE-WSBO). Experiments demonstrate that Blockformer significantly outperforms state-of-the-art Conformer-based models on AISHELL-1: our model achieves a CER of 4.35% without a language model and 4.10% with an external language model on the test set.
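The base weighted-sum idea (Base-WSBO) amounts to combining all blocks' outputs with learned softmax-normalized scalars instead of keeping only the last block's output. The sketch below illustrates just that combination step; the weights would be trainable parameters in the real model, and the function name is this example's own.

```python
import numpy as np

def wsbo(block_outputs, weights):
    """Toy sketch of Base-WSBO: combine the outputs of all encoder (or
    decoder) blocks with softmax-normalized scalar weights.

    block_outputs: (n_blocks, T, d) stacked per-block outputs
    weights:       (n_blocks,) learnable scalars (pre-softmax)
    """
    w = np.exp(weights - weights.max())
    w = w / w.sum()                              # softmax over blocks
    return np.tensordot(w, block_outputs, axes=1)  # (T, d)
```

SE-WSBO replaces the plain scalars with weights produced by a squeeze-and-excitation module, letting the mixture adapt to the input; the overhead in either case is only a handful of parameters per block.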
Existing image captioning methods usually generate a sentence word by word from left to right, constrained by local context comprising the given image and the history of generated words. Many studies have aimed to exploit global information during decoding, e.g., through iterative refinement; however, how to effectively and efficiently incorporate future context remains under-explored. To answer this question, inspired by the fact that Non-Autoregressive Image Captioning (NAIC) can leverage two-sided relations via a modified mask operation, we aim to graft this advance onto a conventional Autoregressive Image Captioning (AIC) model while maintaining inference efficiency with no extra time cost. Specifically, AIC and NAIC models are first trained jointly with a shared visual encoder, forcing the visual encoder to contain sufficient and valid future context; the AIC model is then encouraged to capture the causal dynamics of cross-layer interchange from the NAIC model on its unconfident words, following a teacher-student paradigm optimized with a distribution calibration training objective. Empirical evidence demonstrates that our proposed approach clearly surpasses the state-of-the-art baselines in both automatic metrics and human evaluation on the MS COCO benchmark. The source code is available at: https://github.com/feizc/future-caption.
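The confidence-gated teacher-student idea above can be sketched as a KL-divergence term applied only where the autoregressive student is unconfident. This is a schematic interpretation of the abstract, not the paper's actual objective: the gating rule, KL direction, and threshold here are all assumptions of the sketch.

```python
import numpy as np

def calibration_loss(student_probs, teacher_probs, conf_threshold=0.5):
    """Toy sketch of confidence-gated distribution calibration: only at
    positions where the (autoregressive) student is unconfident is its
    word distribution pulled toward the (non-autoregressive) teacher's
    via a KL-divergence term.

    student_probs, teacher_probs: (T, V) per-position word distributions
    """
    conf = student_probs.max(axis=1)          # student's top-1 confidence
    unconfident = conf < conf_threshold
    if not unconfident.any():
        return 0.0
    s = student_probs[unconfident]
    t = teacher_probs[unconfident]
    kl = np.sum(t * (np.log(t + 1e-9) - np.log(s + 1e-9)), axis=1)
    return float(kl.mean())
```

Gating on confidence keeps the future-context signal focused on exactly the words the left-to-right decoder struggles with, while confident predictions are left untouched.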
Transformers have achieved breakthroughs in NLP and computer vision, and have recently begun to show promising performance in trajectory prediction for autonomous vehicles (AV). How to efficiently model the interactive relationships between the ego agent and other road agents and dynamic objects remains challenging for the standard attention module. In this work, we propose a Transformer-like architectural module, the MnM network, equipped with a novel masked goal-conditioning training procedure for AV trajectory prediction. The resulting model, named golfer, achieves state-of-the-art performance, winning 2nd place in the 2022 Waymo Open Dataset Motion Prediction Challenge and ranking 1st according to minADE.
Referring video object segmentation aims to predict foreground labels for the objects referred to by natural language expressions in videos. Previous methods either depend on 3D ConvNets or incorporate additional 2D ConvNets as encoders to extract mixed spatio-temporal features. However, these methods suffer from spatial misalignment or false distractors due to the delayed and implicit spatio-temporal interaction occurring in the decoding phase. To address these limitations, we propose a Language-Bridged Duplex Transfer (LBDT) module, which uses language as an intermediary bridge to accomplish explicit and adaptive spatio-temporal interaction earlier, in the encoding phase. Concretely, cross-modal attention is performed among the temporal encoder, the referring words, and the spatial encoder to aggregate and transfer language-relevant motion and appearance information. In addition, we propose a Bilateral Channel Activation (BCA) module in the decoding phase to further denoise and highlight spatio-temporally consistent features via channel-wise activation. Extensive experiments show that our method achieves new state-of-the-art performance on four popular benchmarks, with 6.8% and 6.9% absolute AP gains on A2D Sentences and J-HMDB Sentences respectively, while consuming around 7x less computational overhead.
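The channel-wise activation idea behind the BCA module can be illustrated with a minimal gating sketch: each branch's global channel descriptor gates the other branch's channels. This is only a schematic reading of the abstract; the real module's pooling, projections, and activation design are not specified here, so everything below is an assumption of this example.

```python
import numpy as np

def channel_activation(feats_a, feats_b):
    """Toy sketch of bilateral channel-wise activation: each branch's
    globally pooled channel descriptor is squashed by a sigmoid and used
    to gate the other branch's channels, highlighting channels on which
    the two branches agree.

    feats_a, feats_b: (C, H, W) feature maps from two branches
    returns: gated (feats_a, feats_b)
    """
    gate_a = 1.0 / (1.0 + np.exp(-feats_b.mean(axis=(1, 2))))  # (C,)
    gate_b = 1.0 / (1.0 + np.exp(-feats_a.mean(axis=(1, 2))))
    return (feats_a * gate_a[:, None, None],
            feats_b * gate_b[:, None, None])
```

Gating by the opposite branch is what makes the activation "bilateral": a channel survives only if it is supported from both sides, which suppresses features inconsistent across space and time.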
We present a simple yet effective fully convolutional one-stage 3D object detector for LiDAR point clouds of autonomous driving scenes, termed FCOS-LiDAR. Unlike the dominant methods that use the bird's-eye view (BEV), our proposed detector detects objects from the range view (RV, a.k.a. range image) of the LiDAR points. Owing to the range view's compactness and its compatibility with the sampling process of LiDAR sensors on self-driving cars, a range-view-based object detector can be realized using only vanilla 2D convolutions, departing from BEV-based methods, which often involve complicated voxelization operations and sparse convolutions. For the first time, we show that an RV-based 3D detector with standard 2D convolutions alone can achieve performance comparable to state-of-the-art BEV-based detectors while being faster and simpler. More importantly, almost all previous range-view-based detectors focus only on single-frame point clouds, since fusing multi-frame point clouds into a single range view is challenging. In this work, we tackle this challenging issue with a novel range-view projection mechanism, and for the first time demonstrate the benefits of fusing multi-frame point clouds for a range-view-based detector. Extensive experiments on nuScenes demonstrate the superiority of our proposed method, and we believe our work provides strong evidence that RV-based 3D detectors can compete with the current mainstream of BEV-based detectors.
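The basic single-frame range-view projection the abstract builds on can be sketched as a spherical mapping of each LiDAR point to an image pixel. This is a generic textbook-style sketch, not the paper's novel multi-frame mechanism; the image size and vertical field-of-view limits below are hypothetical.

```python
import numpy as np

def project_to_range_view(points, h=32, w=64,
                          fov_up=np.deg2rad(15), fov_down=np.deg2rad(-25)):
    """Toy sketch of projecting a LiDAR point cloud (N, 3) of xyz points
    into a range image: azimuth -> column, inclination -> row, and the
    pixel value is the range (the nearer point wins on collisions)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x * x + y * y + z * z)
    azimuth = np.arctan2(y, x)                        # [-pi, pi]
    inclination = np.arcsin(z / np.maximum(r, 1e-9))
    col = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    row = ((fov_up - inclination) / (fov_up - fov_down) * h).astype(int).clip(0, h - 1)
    image = np.full((h, w), np.inf)
    # write farthest points first so nearer points overwrite them
    for ri, ci, rad in sorted(zip(row, col, r), key=lambda t: -t[2]):
        image[ri, ci] = rad
    return image
```

The resulting (h, w) image is dense and 2D, which is exactly what lets a detector operate on it with vanilla 2D convolutions instead of voxelization and sparse 3D convolutions.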